
chore(deps): update libdd-trace-protobuf digest to 950562b #122

Draft
dd-octo-sts[bot] wants to merge 10 commits into main from
engraver-auto-version-upgrade/renovate/libdd-trace-protobuf-digest

Conversation


dd-octo-sts Bot commented Apr 22, 2026

This PR contains the following updates:

| Package | Type | Update | Change |
|---|---|---|---|
| libdd-trace-protobuf | dependencies | digest | `986aab5` → `950562b` |

litianningdatadog and others added 9 commits March 27, 2026 09:52
…t sentinel file (#81)

* Add .worktrees to .gitignore

Preparing for git worktree usage to enable isolated development
workspaces.

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>

* feat(lambda-lite): detect Lambda Lite and write mini agent sentinel file

- Add `is_lambda_lite()` in http_utils to detect Lambda Lite via
  `AWS_LAMBDA_INITIALIZATION_TYPE=native-http`; includes unit tests for
  all env var states (native-http, on-demand, empty, unset)
- Write `/tmp/datadog/mini_agent_ready` sentinel on startup when running
  in Lambda Lite mode so dd-trace Node.js can switch from LogExporter
  (stdout) to AgentExporter (HTTP :8126)
- Refine release profile: use fat LTO, explicit symbol stripping, and
  `panic = "abort"` for smaller binary size

In standard Lambda, dd-trace detects a running agent via the Extension
path `/opt/extensions/datadog-agent`. Lambda Lite (web function /
native-http mode) does not populate this path, and `/opt` is read-only,
so the standard detection mechanism does not apply. Without an agent
signal, dd-trace falls back to LogExporter and writes traces to stdout
where they are silently dropped.

The sentinel file at `/tmp/datadog/mini_agent_ready` is written after
the mini agent binds to :8126. dd-trace (Node.js) checks this path via
`DATADOG_MINI_AGENT_PATH` in constants.js and switches to AgentExporter
(HTTP :8126) when the file is present. `/tmp` is used because it is the
only writable directory in Lambda Lite; the parent directory
`/tmp/datadog/` is created by the serverless-compat JS layer before
this binary is spawned.
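The detection and sentinel logic described above can be sketched as follows. This is a minimal, hedged illustration of the commit message, not the actual implementation; the function names follow the message (`is_lambda_lite`, the `/tmp/datadog/mini_agent_ready` path, `AWS_LAMBDA_INITIALIZATION_TYPE=native-http`), while the pure helper `is_lambda_lite_value` is introduced here only to make the check testable:

```rust
use std::env;
use std::fs;
use std::io;

/// Pure form of the check (helper name is illustrative): Lambda Lite is
/// identified by AWS_LAMBDA_INITIALIZATION_TYPE being exactly "native-http".
fn is_lambda_lite_value(value: Option<&str>) -> bool {
    matches!(value, Some("native-http"))
}

/// Env-reading wrapper, as described in the commit message.
fn is_lambda_lite() -> bool {
    is_lambda_lite_value(env::var("AWS_LAMBDA_INITIALIZATION_TYPE").ok().as_deref())
}

/// Write the readiness sentinel once the mini agent is listening on :8126.
/// The parent directory /tmp/datadog/ is assumed to already exist
/// (created by the serverless-compat JS layer before this binary runs).
fn write_sentinel() -> io::Result<()> {
    fs::write("/tmp/datadog/mini_agent_ready", b"")
}
```

The env var states covered by the unit tests in the commit (native-http, on-demand, empty, unset) map directly onto the four possible inputs of the pure helper.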

---------

Co-authored-by: Claude Sonnet 4.5 <noreply@anthropic.com>
…cad (#107)

* update libdatadog rev to 8c88979985154d6d97c0fc2ca9039682981eacad

* update licenses
Adds .github/copilot-instructions.md to guide GitHub Copilot auto-review
toward security-relevant patterns on every PR: PII in log statements,
unsafe Rust blocks without invariant documentation, and silently swallowed
errors in network/external-input code paths.

Jira: https://datadoghq.atlassian.net/browse/SVLS-8660

Co-authored-by: Claude Sonnet 4.6 <noreply@anthropic.com>
…ges (#108)

Add two new platform targets to the datadog-serverless-compat CI pipeline:

- win32-ia32: 32-bit Windows build via native windows-2022 runner
  (i686-pc-windows-msvc, UPX-compressed)
- darwin-arm64: macOS Apple Silicon build via native macos-14 runner
  (aarch64-apple-darwin, no UPX — preserves Mach-O code signing)

Each platform adds a build step to build-datadog-serverless-compat.yml,
artifact download/processing in the package job, and an npm publish line
in the publish job of publish.yml.

Co-authored-by: Claude Sonnet 4.6 <noreply@anthropic.com>
…ss-compat (#95)

* feat(log-agent): scaffold datadog-log-agent crate with constants and errors

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

* - use serde flatten with Map instead of Option<Map> to avoid a deserialization quirk
- add generic LogEntry with flatten attributes for runtime enrichment
- add features table and fix zstd compression level docs

* feat(log-agent): add LogAggregator with size/count-bounded batch collection

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

* refactor(log-agent): clean up get_batch loop to include comma bytes in tally

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

* feat(log-agent): add AggregatorService + AggregatorHandle with channel pattern

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

* feat(log-agent): add LogFlusher with zstd compression, retry logic, and OPW mode

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

* fix(log-agent): align flusher tests with spec requirements

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

* fix(log-agent): code quality improvements in flusher and config

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

* feat(log-agent): integrate datadog-log-agent into serverless-compat binary

- Clean public re-exports in datadog-log-agent lib.rs
- Add datadog-log-agent dependency to datadog-serverless-compat
- Wire log agent startup in main.rs following DogStatsD pattern
- Respect DD_LOGS_ENABLED env var (default: true)
- Use FIPS-compliant HTTP client via create_reqwest_client_builder
- Flush logs on same interval as DogStatsD metrics
- Add integration test verifying full pipeline compiles and runs
- Update CLAUDE.md with log-agent architecture and env vars

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

* fix(log-agent): improve main.rs wiring — early return on missing API key, use crate re-exports

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

* test(log-agent): add integration test suite covering full pipeline, batching, retries, and OPW mode

* feat(log-agent): derive Clone on LogFlusher

* - add CLAUDE.md to .gitignore
- disable log agent by default — require explicit DD_LOGS_ENABLED=true
- log the actual error when reqwest client build() fails in start_log_agent
- fail fast in start_log_agent when OPW URL is empty
- apply rustfmt to integration test formatting.

* chore(log-agent): add hyper/http-body-util/hyper-util deps for log server

* feat(log-agent): add LogServer HTTP intake skeleton

* test(log-agent): add LogServer HTTP intake integration tests

* feat(serverless-compat): start LogServer HTTP intake on DD_LOGS_PORT (default 8080)

* chore: remove stale TODO comment — LogServer wires the log-ingestion endpoint

* test(log-agent): add unit and integration tests for LogServer network intake

* test(log-agent): add network intake integration tests for LogServer

Cover the full HTTP→LogServer→AggregatorService→LogFlusher→backend
pipeline, concurrent client ingestion, and error recovery after a
malformed request.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

* fix(log-agent): treat 408/425/429 as retryable instead of permanent 4xx

All 4xx responses were previously short-circuited as permanent failures,
causing rate-limited (429) and timed-out (408) batches to be silently
dropped. These are transient conditions that should go through the
existing retry loop.

TODO: parse Retry-After header on 429 to add proper backoff.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
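The classification change above can be sketched as a pure status check. The function name is hypothetical; the 5xx branch is assumed from the existing retry loop the commit refers to, and only the 408/425/429 carve-out is taken directly from the commit message:

```rust
/// Sketch: 408 (Request Timeout), 425 (Too Early) and 429 (Too Many
/// Requests) are transient and re-enter the retry loop; every other 4xx
/// remains a permanent failure.
fn is_retryable(status: u16) -> bool {
    match status {
        408 | 425 | 429 => true, // transient client errors (previously dropped)
        500..=599 => true,       // server errors: assumed already retryable
        _ => false,              // remaining 4xx (and success) do not retry
    }
}
```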

* fix(serverless-compat): warn when log agent flush fails

Previously the boolean returned by LogFlusher::flush() was silently
discarded, giving operators no signal when logs were being dropped.
Now logs a warning on each failed flush cycle.

TODO: expose as a statsd counter/gauge for durable telemetry.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

* fix(log-server): accept chunked requests by checking Content-Length header instead of size_hint

size_hint().upper() returns None for chunked/streaming bodies; coercing
that to u64::MAX caused every request without a Content-Length header to
be rejected with 413 before any body bytes were read.

Replace the pre-read guard with a direct Content-Length header parse:
reject early only when the header is present and exceeds MAX_BODY_BYTES,
and fall through to the post-read bytes.len() check otherwise.

Adds a regression test that sends a raw Transfer-Encoding: chunked
request (no Content-Length) via TcpStream and asserts 200 + correct
aggregator insertion.
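The replacement guard can be sketched as a pure function over the header value. `MAX_BODY_BYTES` matches the constant named in the commit but its value here is illustrative; the function name is hypothetical:

```rust
const MAX_BODY_BYTES: usize = 5 * 1024 * 1024; // illustrative cap

/// Pre-read guard sketch: reject early only when a Content-Length header is
/// present and exceeds the limit. Chunked/streaming requests carry no
/// Content-Length, so they fall through to the post-read bytes.len() check.
fn reject_before_read(content_length: Option<&str>) -> bool {
    match content_length.and_then(|v| v.parse::<usize>().ok()) {
        Some(len) => len > MAX_BODY_BYTES,
        None => false, // absent or unparsable: defer to the post-read check
    }
}
```

Note how this differs from the buggy `size_hint().upper()` approach: an absent upper bound maps to "defer" rather than being coerced to `u64::MAX` and rejected with 413.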

* chore(log-agent): change default log intake port to 10517 and document bind-failure gap

- Extract DEFAULT_LOG_INTAKE_PORT = 10517 constant (was hardcoded 8080)
- Add TODO explaining that LogServer::serve binds inside the spawned task,
  so a port-conflict failure is silently swallowed while the caller still
  returns Some(...) and logs "log agent started"

* feat(log-agent): send additional endpoints concurrently in flush()

- Replace sequential for-loop over additional_endpoints with join_all()
- Add futures crate dependency for join_all
- Add unit tests: one verifying all endpoints receive the batch, one
  using Barrier(2) to prove concurrent in-flight dispatch

Rationale: LogFlusherConfig documented additional_endpoints as shipped
"in parallel" but the implementation was sequential — this aligns the
implementation with the documented contract

This commit made by [/dd:git:commit:quick](https://github.com/DataDog/claude-marketplace/tree/main/dd/commands/git/commit/quick.md)
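The sequential-to-concurrent change can be illustrated with a thread-based sketch. The real implementation is async and uses `futures::future::join_all` as the commit says; here `std::thread::scope` stands in for that, `send` stands in for the per-endpoint HTTP call, and all names are illustrative:

```rust
use std::thread;

/// Ship one batch to every additional endpoint concurrently instead of
/// looping over them sequentially.
fn ship_to_all(
    endpoints: &[String],
    batch: &[u8],
    send: &(impl Fn(&str, &[u8]) -> bool + Sync),
) -> Vec<bool> {
    thread::scope(|s| {
        // Spawn one in-flight dispatch per endpoint...
        let handles: Vec<_> = endpoints
            .iter()
            .map(|ep| s.spawn(move || send(ep.as_str(), batch)))
            .collect();
        // ...then join in order so results line up with `endpoints`.
        handles.into_iter().map(|h| h.join().unwrap()).collect()
    })
}
```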

* perf(log-agent): dispatch all log batches concurrently in flush()

- Replace sequential for-batch loop with join_all over all batches
- Each batch now ships to the primary and additional endpoints concurrently
- Collect per-batch primary results via join_all then fold with .all()

Rationale: multiple batches were flushed one at a time; concurrent
dispatch reduces total flush latency when the aggregator produces
more than one batch

This commit made by [/dd:git:commit:quick](https://github.com/DataDog/claude-marketplace/tree/main/dd/commands/git/commit/quick.md)

* fix(log-agent): drain response body after each HTTP send

- Call resp.bytes().await after receiving a response to consume the body
- Ensures the TCP connection is returned to the pool instead of lingering
  in CLOSE_WAIT, which would exhaust the connection pool under high flush
  frequency

Rationale: reqwest reuses connections only after the response body is
fully consumed; skipping this keeps connections open unnecessarily

This commit made by [/dd:git:commit:quick](https://github.com/DataDog/claude-marketplace/tree/main/dd/commands/git/commit/quick.md)

* feat(log-agent): per-endpoint API key for additional_endpoints

- Add LogsAdditionalEndpoint {api_key, url, is_reliable} matching the
  bottlecap/datadog-agent wire format (Host+Port deserialized to url)
- Add parse_additional_endpoints() and read DD_LOGS_CONFIG_ADDITIONAL_ENDPOINTS
  in LogFlusherConfig::from_env()
- Update ship_batch to accept explicit api_key param so each additional
  endpoint uses its own key instead of the primary key
- Re-export LogsAdditionalEndpoint from crate root
- Update all test fixtures to use the new struct

Rationale: aligns with the datadog-lambda-extension bottlecap model where
each additional endpoint authenticates independently with its own API key

* fix(log-agent): fall back to uncompressed on zstd compression failure

- Replace ? propagation with match in ship_batch compression block
- On compress error, warn and send raw bytes without Content-Encoding header
- Avoids dropping the batch entirely due to a transient encoder failure

Rationale: compression failures are rare (OOM, corrupted encoder state)
and silently dropping the batch is worse than sending it uncompressed
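The fallback shape can be sketched with an injectable compressor. The real code uses the zstd encoder; here `compress` stands in for it and the function name is hypothetical:

```rust
/// Try to compress the payload; on encoder failure, warn and return the raw
/// bytes with no Content-Encoding value rather than dropping the batch.
fn body_and_encoding(
    raw: Vec<u8>,
    compress: impl Fn(&[u8]) -> Result<Vec<u8>, String>,
) -> (Vec<u8>, Option<&'static str>) {
    match compress(&raw) {
        Ok(compressed) => (compressed, Some("zstd")),
        Err(e) => {
            eprintln!("warn: compression failed ({e}); sending uncompressed");
            (raw, None) // omit the Content-Encoding header entirely
        }
    }
}
```

This replaces `?` propagation, which would have turned a transient encoder failure into a dropped batch.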

* feat(log-agent): implement cross-invocation retry via RequestBuilder passback

- Change flush() -> bool to flush(Vec<RequestBuilder>) -> Vec<RequestBuilder>
- send_with_retry returns Err(builder) on transient exhaustion instead of FlushError
- serverless-compat flush loop stores and redrives failed builders each cycle
- Additional endpoint failures remain best-effort (not tracked for retry)
- Add tests: cross-invocation redrive succeeds, additional endpoint failures excluded

Rationale: aligns with bottlecap FlushingService retry pattern; batches that
hit transient intake errors survive across Lambda invocations instead of being
silently dropped after MAX_FLUSH_ATTEMPTS

This commit made by [/dd:git:commit:quick](https://github.com/DataDog/claude-marketplace/tree/main/dd/commands/git/commit/quick.md)
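The passback pattern can be sketched with plain `String`s standing in for `reqwest::RequestBuilder` values (which the real signature carries); the function name follows the commit, the rest is illustrative:

```rust
/// flush() takes last cycle's failed requests, redrives them alongside the
/// new batches, and returns whatever failed this cycle so the caller can
/// carry it into the next invocation instead of dropping it.
fn flush(
    mut pending: Vec<String>,
    new_batches: Vec<String>,
    send: impl Fn(&str) -> bool,
) -> Vec<String> {
    pending.extend(new_batches);
    // Keep only the requests that still fail; they become next cycle's input.
    pending.into_iter().filter(|req| !send(req.as_str())).collect()
}
```

The caller's loop stores the returned vector and feeds it back in on the next flush cycle, which is what lets a batch survive across Lambda invocations.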

* fix(log-agent): account for JSON framing bytes in batch size checks

- Include `[`/`]` (2 bytes) and comma separators in `is_full()` and `get_batch()` overflow guards
- Batch wire size could silently exceed MAX_CONTENT_BYTES by up to N+1 bytes

Rationale: JSON array framing is part of the wire payload but was not counted in the 5 MB cap checks

This commit made by [/dd:git:commit:quick](https://github.com/DataDog/claude-marketplace/tree/main/dd/commands/git/commit/quick.md)
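The corrected accounting can be sketched directly: a batch serialized as a JSON array costs the entry bytes plus two bracket bytes plus one comma per gap between entries. Function names are illustrative:

```rust
/// Wire size of a JSON array framing `n` entries of the given sizes:
/// `[` + entries + (n-1) commas + `]`.
fn wire_size(entry_sizes: &[usize]) -> usize {
    let commas = entry_sizes.len().saturating_sub(1);
    2 + entry_sizes.iter().sum::<usize>() + commas
}

/// is_full-style overflow guard: would adding `next` push the framed
/// batch over the cap?
fn would_overflow(entry_sizes: &[usize], next: usize, max_content_bytes: usize) -> bool {
    let mut sizes = entry_sizes.to_vec();
    sizes.push(next);
    wire_size(&sizes) > max_content_bytes
}
```

Counting only the raw entry bytes, as before, undercounts by exactly the `N+1` framing bytes the commit mentions (N-1 commas plus 2 brackets).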

* test(log-agent): replace tautological OPW URL test with real behavior check

- Old test only asserted String::new() is empty — never called production code
- New test calls start_log_agent() with OPW enabled + empty URL and asserts None

Rationale: the test was a no-op; it now exercises the actual guard in start_log_agent()

This commit made by [/dd:git:commit:quick](https://github.com/DataDog/claude-marketplace/tree/main/dd/commands/git/commit/quick.md)

* test(log-agent): remove no-op compile-time size_of test

- Delete _assert_log_flusher_constructible and its unused imports
- size_of checks never fail and provide no constructibility guarantee

Rationale: cargo check and existing integration tests already cover API visibility; the dead function only created maintenance noise

This commit made by [/dd:git:commit:quick](https://github.com/DataDog/claude-marketplace/tree/main/dd/commands/git/commit/quick.md)

* refactor(log-agent): rename LogEntry::new to from_message; drop compression_level range from docs

- Rename LogEntry::new → LogEntry::from_message and update all call sites
- Remove inaccurate "1–21" range from compression_level doc comment

Rationale: new() implies all fields are provided; from_message makes the partial construction explicit per Rust API Guidelines

This commit made by [/dd:git:commit:quick](https://github.com/DataDog/claude-marketplace/tree/main/dd/commands/git/commit/quick.md)

* refactor(log-agent): rename LogEntry → IntakeEntry and log_entry → intake_entry

- Rename struct, file, and all call sites across the codebase
- Name now references the Datadog Logs Intake API format explicitly

Rationale: reviewer feedback — the name should reflect the Intake Log format; LogEntry was too generic

This commit made by [/dd:git:commit:quick](https://github.com/DataDog/claude-marketplace/tree/main/dd/commands/git/commit/quick.md)

* refactor(log-agent): drop Log prefix from Aggregator and AggregatorCommand

- Rename LogAggregator → Aggregator and LogAggregatorCommand → AggregatorCommand
- No need for Log prefix inside the logs crate

Rationale: reviewer feedback — redundant Log prefix within datadog-log-agent crate

This commit made by [/dd:git:commit:quick](https://github.com/DataDog/claude-marketplace/tree/main/dd/commands/git/commit/quick.md)

* refactor(log-agent): rename FlusherMode → Destination; add #[must_use] to from_env

- FlusherMode renamed to Destination — it describes where logs are sent, not how
- #[must_use] added to LogFlusherConfig::from_env() to catch ignored return values

Rationale: reviewer feedback — FlusherMode name is misleading; Destination is more accurate

This commit made by [/dd:git:commit:quick](https://github.com/DataDog/claude-marketplace/tree/main/dd/commands/git/commit/quick.md)

* chore: fix all clippy and fmt warnings across workspace

- Wrap env::set_var/remove_var in unsafe blocks (Rust 2024 requirement)
- Collapse nested if-let + if into let-chain patterns
- Replace match Ok/Err with .err(), if let Err(_) with .is_err()
- Use .is_multiple_of() and assert!(!…) idioms
- Remove redundant as u32 casts on already-u32 fields
- Suppress result_large_err for external figment::Error in test modules
- Suppress disallowed_methods for reqwest::Client::builder in tests

Rationale: cargo clippy --workspace --all-targets and cargo fmt reported errors and warnings that needed to be resolved

This commit made by [/dd:git:commit:quick](https://github.com/DataDog/claude-marketplace/tree/main/dd/commands/git/commit/quick.md)

* feat(datadog-log-agent): add LogEntry and FlusherMode type aliases

- Add `pub type LogEntry = IntakeEntry` alias
- Add `pub type FlusherMode = Destination` alias

Rationale: bottlecap's datadog-log-agent adoption (SVLS-8573) was
written against these names. The crate used IntakeEntry/Destination
but never exported the aliases the plan specified, leaving bottlecap
unable to compile against the crate.

* revert(datadog-log-agent): remove LogEntry/FlusherMode type aliases

Bottlecap now uses IntakeEntry and Destination directly,
so the aliases are not needed.

* refactor(log-agent): rename crate datadog-log-agent → datadog-logs-agent

- Renamed crates/datadog-log-agent/ directory to crates/datadog-logs-agent/
- Updated package name in crate Cargo.toml
- Updated dependency reference in datadog-serverless-compat/Cargo.toml
- Updated all use datadog_log_agent:: identifiers in main.rs, examples, and tests
- Updated references in scripts/test-log-intake.sh and AGENTS.md

Rationale: Align crate name with the plural "logs" convention used elsewhere in the codebase.

This commit made by [/dd:git:commit:quick](https://github.com/DataDog/claude-marketplace/tree/main/dd/commands/git/commit/quick.md)

* fix(log-agent): fix OPW compression bleed and add TODO comments for follow-up fixes

- flusher.rs: rename use_compression → primary_use_compression; pass
  config.use_compression to additional endpoints so they honour
  DD_LOGS_CONFIG_USE_COMPRESSION regardless of OPW primary destination.
  Add DD-PROTOCOL header via is_datadog_intake flag instead of checking
  mode directly, so additional endpoints always get the header.
  Add tests: test_opw_primary_additional_endpoint_receives_dd_protocol_header,
  test_opw_primary_additional_endpoint_compresses_when_enabled,
  test_opw_primary_never_compressed_even_when_flag_set.

- server.rs: add TODO(SVLS-chunked-body-limit) describing the chunked
  transfer body-size bypass and the Limited-based fix to implement later.

- main.rs: replace vague TODO with TODO(SVLS-bind-fail-fast) summarising
  the bind-fail-fast fix (BoundLogServer split + async start_log_agent).

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

---------

Co-authored-by: Claude Sonnet 4.6 <noreply@anthropic.com>
The npm token is now handled via the setup-node action's registry-url,
making the explicit `npm config set` step redundant.

Co-authored-by: Claude Sonnet 4.6 <noreply@anthropic.com>
…n trait (#111)

* feat(agent-config): allow extensible configuration via ConfigExtension trait

Introduces a generic `Config<E: ConfigExtension>` type that lets consumers
define additional configuration fields without modifying or copy-pasting the
core crate. Includes a unified `Source` type for dual extraction from both
env vars and YAML, a `merge_fields!` macro to reduce merge boilerplate, and
moves Lambda-specific fields out of the core Config struct.

Also restructures the crate to use a conventional `src/` layout and adds a
README documenting the extension API.
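A minimal sketch of the extension pattern described above, with the type names taken from the commit (`Config<E: ConfigExtension>`) but the trait surface and field shapes entirely hypothetical, since the real crate extracts fields via figment sources and a `merge_fields!` macro:

```rust
/// Consumers add configuration fields by implementing this trait instead
/// of forking or copy-pasting the core crate.
trait ConfigExtension: Default {
    /// Merge a value extracted from a source (env var or YAML) into self.
    fn merge(&mut self, key: &str, value: &str);
}

#[derive(Default)]
struct Config<E: ConfigExtension> {
    site: Option<String>, // a core field, shared by all consumers
    extension: E,         // consumer-defined fields live here
}

/// Example extension holding the Lambda-specific fields moved out of core.
#[derive(Default)]
struct LambdaExtension {
    function_name: Option<String>,
}

impl ConfigExtension for LambdaExtension {
    fn merge(&mut self, key: &str, value: &str) {
        if key == "function_name" {
            self.function_name = Some(value.to_string());
        }
    }
}
```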

* refactor(agent-config): organize crate into sources/ and deserializers/ modules

Move config source implementations (env, yaml) into `src/sources/` and
type definitions with custom deserialization into `src/deserializers/`.
Re-exports at the crate root preserve all existing import paths.

* refactor(agent-config): move inline deserializer helpers to deserializers/helpers.rs

Extracts all generic deserializer functions (deserialize_optional_string,
deserialize_with_default, duration parsers, key-value parsers, etc.) from
lib.rs into src/deserializers/helpers.rs. Re-exported at the crate root
so all existing import paths continue to work.

* refactor(agent-config): reorder lib.rs so Config struct is visible first

Reorganize lib.rs so an engineer opening the file immediately sees the
Config struct and its fields, followed by the loading entry points, then
the extension trait, builder, and macros. Sections are separated with
headers for quick scanning.

* fix(agent-config): make NoExtensionSource deserialize from map data

Change NoExtensionSource from a unit struct to an empty struct so serde
accepts map-shaped data from figment (env vars / YAML) instead of
expecting null/unit. Prevents spurious warning logs on every get_config()
call when no extension is used.

* docs(agent-config): improve ConfigExtension trait and Source docs

Address reviewer feedback: document field name collision behavior,
clarify Source type requirements and their runtime failure modes,
and expand the README with collision and flat-field explanations.
* update crates for cargo audit

* apply clippy fixes

* apply formatting, update license
* Update libdatadog rev

* Update rustls-native-certs

* Update stats_flusher to use DefaultHttpClient

* Adapt trace flusher to the new trait-based API

* Add FUNCTION_TARGET to unit tests, update license, update rustls-webpki

dd-octo-sts Bot commented Apr 22, 2026

⚠️ Artifact update problem

Renovate failed to update an artifact related to this branch. You probably do not want to merge this PR as-is.

♻ Renovate will retry this branch, including artifacts, only when one of the following happens:

  • any of the package files in this branch needs updating, or
  • the branch becomes conflicted, or
  • you click the rebase/retry checkbox if found above, or
  • you rename this PR's title to start with "rebase!" to trigger it manually

The artifact failure details are included below:

File name: Cargo.lock
Command failed: cargo update --config net.git-fetch-with-cli=true --manifest-path crates/datadog-trace-agent/Cargo.toml --workspace
error: rustup could not choose a version of cargo to run, because one wasn't specified explicitly, and no default is configured.
help: run 'rustup default stable' to download the latest stable release of Rust and set it as your default toolchain.

dd-octo-sts Bot force-pushed the engraver-auto-version-upgrade/renovate/libdd-trace-protobuf-digest branch from b622626 to da37474 on April 23, 2026 at 13:09
dd-octo-sts Bot changed the title from "chore(deps): update libdd-trace-protobuf digest to 530cd96" to "chore(deps): update libdd-trace-protobuf digest to 950562b" on Apr 23, 2026